Automated Essay Scoring by Maximizing Human-Machine Agreement
Abstract
Previous approaches to automated essay scoring (AES) learn a rating model by minimizing a classification, regression, or pairwise classification loss, depending on the learning algorithm used. In this paper, we argue that current AES systems can be further improved by taking into account the agreement between human and machine raters. To this end, we propose a rank-based approach that uses listwise learning-to-rank algorithms to learn a rating model, where the agreement between the human and machine raters is directly incorporated into the loss function. Various linguistic and statistical features are used to facilitate the learning algorithms. Experiments on the publicly available English essay dataset, Automated Student Assessment Prize (ASAP), show that our proposed approach outperforms state-of-the-art algorithms and achieves performance comparable to professional human raters, which suggests the effectiveness of our proposed method for automated essay scoring.
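On the ASAP dataset, human-machine agreement is conventionally measured with quadratic weighted kappa (QWK), the competition's official metric. A minimal sketch of the standard QWK definition (an illustrative implementation, not the paper's own code):

```python
import numpy as np

def quadratic_weighted_kappa(human, machine, min_rating, max_rating):
    """Quadratic weighted kappa between two integer rating vectors.

    1.0 means perfect agreement; 0.0 means chance-level agreement.
    """
    human = np.asarray(human)
    machine = np.asarray(machine)
    n = max_rating - min_rating + 1

    # Observed rating co-occurrence matrix O
    O = np.zeros((n, n))
    for h, m in zip(human, machine):
        O[h - min_rating, m - min_rating] += 1

    # Quadratic disagreement weights: 0 on the diagonal, growing with distance
    i, j = np.indices((n, n))
    W = ((i - j) ** 2) / ((n - 1) ** 2)

    # Expected matrix E under independence of the two raters' marginals
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / O.sum()

    return 1.0 - (W * O).sum() / (W * E).sum()
```

A rank-based loss that rewards this kind of agreement pushes the model toward orderings that human raters would produce, rather than merely minimizing per-essay score error.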
Similar Papers
Toward Evaluation of Writing Style: Finding Overly Repetitive Word Use in Student Essays
Automated essay scoring is now an established capability used from elementary school through graduate school for purposes of instruction and assessment. Newer applications provide automated diagnostic feedback about student writing. Feedback includes errors in grammar, usage, and mechanics, comments about writing style, and evaluation of discourse structure. This paper reports on a system that ...
Automated Essay Scoring and The Repair of Electronics
The Hewlett Foundation sponsored the Automated Student Assessment Prize on kaggle.com, challenging teams to produce essay evaluation models that best approximate human graders. Contestants predicted the scores of standardized-testing essays from grades 7-10. Teams were provided with 8 sets of labeled training data. Each set corresponds to a different essay prompt, grading rubric, and range of p...
Enriching Automated Essay Scoring Using Discourse Marking
Electronic Essay Rater (e-rater) is a prototype automated essay scoring system built at Educational Testing Service (ETS) that uses discourse marking, in addition to syntactic information and topical content vector analyses, to automatically assign essay scores. This paper gives a general description of e-rater as a whole, but its emphasis is on the importance of discourse marking and argument pa...
Automated essay scoring and the future of educational assessment in medical education.
CONTEXT Constructed-response tasks, which range from short-answer tests to essay questions, are included in assessments of medical knowledge because they allow educators to measure students' ability to think, reason, solve complex problems, communicate and collaborate through their use of writing. However, constructed-response tasks are also costly to administer and challenging to score because...
Automated Essay Grading Using Machine Learning
The project aims to build an automated essay scoring system using a data set of ≈13000 essays from kaggle.com. These essays were divided into 8 different sets based on context. We extracted features such as total word count per essay, sentence count, number of long words, part-of-speech counts, etc., from the training set essays. We used a linear regression model to learn from these features and ge...
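The feature-plus-linear-regression pipeline this summary describes can be sketched as follows. The feature set here is illustrative (word, sentence, and long-word counts; the part-of-speech counts mentioned above are omitted to stay self-contained), and all function names are hypothetical:

```python
import re
import numpy as np

def essay_features(text):
    """A few illustrative surface features of an essay."""
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    long_words = [w for w in words if len(w) > 6]
    return [len(words), len(sentences), len(long_words)]

def fit_scorer(essays, scores):
    """Least-squares linear regression from essay features to human scores."""
    X = np.array([essay_features(e) + [1.0] for e in essays])  # append bias column
    w, *_ = np.linalg.lstsq(X, np.array(scores, dtype=float), rcond=None)
    return w

def predict(w, essay):
    """Score a new essay with the fitted weight vector."""
    x = np.array(essay_features(essay) + [1.0])
    return float(x @ w)
```

Unlike the rank-based approach of the main paper, a regression model of this kind minimizes squared score error directly and does not optimize human-machine agreement.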